PolicyPad: Collaborative Prototyping of LLM Policies
Feng, K. J. Kevin, Kuo, Tzu-Sheng, Chen, Quan Ze, Cheong, Inyoung, Holstein, Kenneth, Zhang, Amy X.
As LLMs gain adoption in high-stakes domains like mental health, domain experts are increasingly consulted to provide input into policies governing their behavior. From observations of 19 policymaking workshops with 9 experts over 15 weeks, we identified opportunities to better support rapid experimentation, feedback, and iteration for collaborative policy design processes. We present PolicyPad, an interactive system that facilitates the emerging practice of LLM policy prototyping by drawing from established UX prototyping practices, including heuristic evaluation and storyboarding. Using PolicyPad, policy designers can collaborate on drafting a policy in real time while independently testing policy-informed model behavior with usage scenarios. We evaluate PolicyPad through workshops with 8 groups of 22 domain experts in mental health and law, finding that PolicyPad enhanced collaborative dynamics during policy design, enabled tight feedback loops, and led to novel policy contributions. Overall, our work paves participatory paths for advancing AI alignment and safety.
- North America > United States > Washington > King County > Seattle (0.14)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- (17 more...)
- Research Report > Experimental Study (0.67)
- Research Report > New Finding (0.46)
- Instructional Material > Course Syllabus & Notes (0.45)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Education (1.00)
- Government > Regional Government > North America Government > United States Government (0.92)
Hong Kong preparing policy statement for artificial intelligence in finance
The Hong Kong government is preparing to issue its maiden policy statement on the use of artificial intelligence in finance, according to people familiar with the matter, in a move that could catalyze the use of the technology in areas from trading to investment banking and cryptocurrencies. The city's Financial Services and Treasury Bureau plans to issue a framework of guidelines to touch on the ethical use of AI and general principles for applying the technology in the finance world, the people said, asking not to be identified discussing private information. Officials are still drafting the document while getting feedback from the industry, the people said. Details are still subject to change in the coming weeks, they added. While specifics remain unclear, the document is broadly intended to signal Hong Kong's support for AI, as governments around the world get to grips with the technology's potential.
- Banking & Finance > Trading (0.67)
- Government > Regional Government > Asia Government > China Government > Hong Kong Government (0.57)
Bias in LLMs as Annotators: The Effect of Party Cues on Labelling Decision by Large Language Models
Vera, Sebastian Vallejo, Driggers, Hunter
The increasing sophistication of large language models (LLMs) has allowed for their more prominent presence in political science research. One particular area gathering significant attention in the field is the use of LLMs as annotators. Research has shown promising results, with LLMs often outperforming human coders (Gilardi, Alizadeh and Kubli, 2023) and providing comparable accuracy when labelling political text across multiple languages (Heseltine and Clemm von Hohenberg, 2024). While researchers have evaluated the performance of LLMs as annotators across different domains, there is still little information on how the known biases of LLMs (see Gallegos et al., 2024) can affect their performance. For human annotators, studies show that political cues, such as party, have an effect on their coding decisions (Laver and Garry, 2000; Benoit et al., 2016; Ennser-Jedenastik and Meyer, 2018).
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.94)
Embracing the Generative AI Revolution: Advancing Tertiary Education in Cybersecurity with GPT
The rapid advancement of generative Artificial Intelligence (AI) technologies, particularly Generative Pre-trained Transformer (GPT) models such as ChatGPT, has the potential to significantly impact cybersecurity. In this study, we investigated the impact of GPTs, specifically ChatGPT, on tertiary education in cybersecurity, and provided recommendations for universities to adapt their curricula to meet the evolving needs of the industry. Our research highlighted the importance of understanding the alignment between GPT's ``mental model'' and human cognition, as well as how GPT capabilities can enhance human skills as framed by Bloom's taxonomy. By analyzing current educational practices and the alignment of curricula with industry requirements, we concluded that universities providing practical degrees like cybersecurity should align closely with industry demand and embrace the inevitable generative AI revolution, while applying stringent ethics oversight to safeguard responsible GPT usage. We proposed a set of recommendations focused on updating university curricula, promoting agility within universities, fostering collaboration between academia, industry, and policymakers, and evaluating and assessing educational outcomes.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Oceania > New Zealand > North Island > Waikato (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- (8 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Overview (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
- Education > Educational Setting > Higher Education (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
UK lays out regulatory model for Artificial Intelligence
The UK is setting the stage for its future Artificial Intelligence (AI) regulatory model. Much like the EU, it suggests adopting a risk-based approach but will differ from the bloc by entrusting enforcement to a panel of regulators. The British government presented its "pro-innovation approach to regulating AI" on Monday (18 July) alongside its new Data Protection and Digital Information Bill. It follows the presentation of the National AI Strategy last September, a ten-year plan to ensure the UK becomes a global AI superpower. The country has invested more than £2.3 billion (€2.7 billion) in AI since 2014.
Malicious Code Execution Detection and Response Immune System inspired by the Danger Theory
Kim, Jungwon, Greensmith, Julie, Twycross, Jamie, Aickelin, Uwe
The analysis of system calls is one method employed by anomaly detection systems to recognise malicious code execution. Similarities can be drawn between this process and the behaviour of certain cells of the human immune system, and these similarities can be applied to construct an artificial immune system. A recently developed hypothesis in immunology, the Danger Theory, states that our immune system responds to the presence of intruders through sensing molecules belonging to those invaders, plus signals generated by the host indicating danger and damage. We propose the incorporation of this concept into a responsive intrusion detection system, where behavioural information of the system and running processes is combined with information regarding individual system calls.
- Europe > United Kingdom > England > Nottinghamshire > Nottingham (0.14)
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > District of Columbia > Washington (0.04)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)